Nearly Optimal Private LASSO
Authors
Abstract
We present a nearly optimal differentially private version of the well-known LASSO estimator. Our algorithm provides privacy protection with respect to each training example. The excess risk of our algorithm, compared to the non-private version, is Õ(1/n^{2/3}), assuming all the input data has bounded ℓ∞ norm. This is the first differentially private algorithm that achieves such a bound without polynomial dependence on the dimension p and without additional assumptions on the design matrix. In addition, we show that this error bound is nearly optimal among all differentially private algorithms.
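The abstract does not spell out the construction, but ℓ1-constrained private optimization of this kind is typically approached with a Frank-Wolfe iteration whose vertex selection is privatized, for example by report-noisy-max over the gradient coordinates. The sketch below illustrates that template; the function name, the squared loss, and the noise calibration are illustrative assumptions, not the paper's exact algorithm or privacy accounting.

```python
import numpy as np

def dp_frank_wolfe_lasso(X, y, l1_radius=1.0, steps=100,
                         epsilon=1.0, delta=1e-6, seed=None):
    """Sketch of private Frank-Wolfe for l1-constrained least squares.

    Minimizes (1/n)||Xw - y||^2 over {w : ||w||_1 <= l1_radius}. Each
    step picks a vertex of the l1 ball (a signed coordinate) via
    report-noisy-max on the gradient. Noise scale below is schematic,
    not a tuned or verified privacy analysis.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    # Schematic Laplace scale: per-example gradient sensitivity is O(1/n)
    # when ||x_i||_inf, |y_i|, and l1_radius are O(1); sqrt-composition
    # over the number of steps.
    scale = 2.0 * (l1_radius + 1.0) * np.sqrt(steps * np.log(1.0 / delta)) / (n * epsilon)
    for t in range(steps):
        grad = (2.0 / n) * X.T @ (X @ w - y)
        scores = np.concatenate([grad, -grad])  # <grad, +e_j> and <grad, -e_j>
        j = int(np.argmin(scores + rng.laplace(scale=scale, size=2 * p)))
        s = np.zeros(p)
        s[j % p] = l1_radius if j < p else -l1_radius  # chosen signed vertex
        eta = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        w = (1.0 - eta) * w + eta * s
    return w
```

The appeal of this template is that each step only compares 2p scalar scores, so the noise cost of the private selection grows logarithmically in p, which is the usual route to bounds free of polynomial dependence on the dimension.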
Similar resources
Differentially Private Model Selection via Stability Arguments and the Robustness of the Lasso
We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the goal is to determine the model that best “fits” the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined answer, in ...
Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso
We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the goal is to determine the model that best “fits” the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined...
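As a concrete illustration of the stability idea these blurbs refer to, a minimal mechanism releases the top-scoring model only when its lead over the runner-up survives noise, and declines to answer otherwise. All names and constants below are hypothetical, and the noise/threshold calibration is schematic rather than a verified privacy analysis.

```python
import numpy as np

def stable_private_argmax(scores, sensitivity, epsilon, threshold, seed=None):
    """Release the best-scoring model only if its lead is stable.

    Adds Laplace noise to the gap between the top two scores and
    compares it to a threshold; returns None ("no stable winner")
    otherwise. Schematic sketch of a stability-based selection test.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(scores)
    best, runner_up = order[-1], order[-2]
    gap = scores[best] - scores[runner_up]
    noisy_gap = gap + rng.laplace(scale=2.0 * sensitivity / epsilon)
    return int(best) if noisy_gap > threshold else None
```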
Reweighting the Lasso
This paper investigates how changing the growth rate of the sequence of penalty weights affects the asymptotics of Lasso-type estimators. The cases of non-singular and nearly singular design are considered.
The Lasso with Nearly Orthogonal Latin Hypercube Designs
We consider the Lasso problem when the input values need to take multiple levels. In this situation, we propose to use nearly orthogonal Latin hypercube designs, originally motivated by computer experiments, to significantly enhance the variable selection accuracy of the Lasso. The use of such designs ensures small column-wise correlations in variable selection and gives flexibility in identify...
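For intuition, a basic (random) Latin hypercube design is easy to generate: one stratified permutation per column. The snippet below builds one and reports the largest pairwise column correlation, the quantity such designs keep small; it is a plain LHD, not the nearly orthogonal construction the paper studies.

```python
import numpy as np

def latin_hypercube(n, p, seed=None):
    """n points in [0,1]^p; each column hits every 1/n stratum exactly once."""
    rng = np.random.default_rng(seed)
    # Random permutation of strata per column, jittered within each stratum.
    perms = np.argsort(rng.random((n, p)), axis=0)
    return (perms + rng.random((n, p))) / n

X = latin_hypercube(64, 8, seed=0)
corr = np.corrcoef(X, rowvar=False)
print(f"max |pairwise column correlation| = {np.max(np.abs(corr - np.eye(8))):.3f}")
```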
Penalized Linear Unbiased Selection
We introduce MC+, a fast, continuous, nearly unbiased, and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias of the LASSO interferes with variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased ...
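For reference, the minimax concave penalty underlying MC+ has a simple closed form, and for standardized predictors so does its univariate thresholding rule. A minimal sketch, assuming unit-variance columns and gamma > 1:

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP value, elementwise: lam*|t| - t^2/(2*gamma) up to |t| = gamma*lam,
    then constant at gamma*lam^2/2."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, lam * a - a**2 / (2 * gamma),
                    0.5 * gamma * lam**2)

def mcp_threshold(z, lam, gamma):
    """Minimizer of 0.5*(z - b)^2 + mcp_penalty(b), assuming gamma > 1."""
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    return np.where(np.abs(z) <= gamma * lam, soft / (1.0 - 1.0 / gamma), z)
```

Unlike the LASSO's soft-thresholding, the MCP rule leaves coefficients beyond gamma*lam entirely unshrunk, which is the source of the near-unbiasedness the blurb describes.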
Journal:
Volume/Issue:
Pages: -
Publication year: 2015